perm filename HANKS[W86,JMC] blob sn#807055 filedate 1986-01-17 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00002 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00002 00002	hanks[w86,jmc]		Notes on "Temporal Reasoning and Default Logics" (key: nonmonotonic)
C00011 ENDMK
C⊗;
hanks[w86,jmc]		Notes on "Temporal Reasoning and Default Logics" (key: nonmonotonic)

1. Suppose we know that a certain phenomenon normally persists
for a definite length of time, e.g. night persists till dawn.
We might consider handling this by a general default rule that
phenomena persist and introducing dawn as an abnormality, or
we might incorporate the termination in the default description
of the phenomenon.  My intuition is that we win by incorporating
any definite knowledge we have in the default description.  The
idea is that this will keep the tower of abnormalities simpler.
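One way to sketch the two options in abnormality notation (the predicates holds, ab, dawn and the discrete-time successor are illustrative choices here, not the paper's):

```latex
% (a) Generic persistence default, with dawn supplying an abnormality:
\forall f, t \;.\; holds(f,t) \land \lnot ab(f,t) \supset holds(f,t+1)
\forall t \;.\; dawn(t+1) \supset ab(night,t)

% (b) The termination folded into the default description of night itself:
\forall t \;.\; holds(night,t) \land \lnot dawn(t+1) \land \lnot ab'(night,t)
        \supset holds(night,t+1)
```

Under (b) dawn never has to be registered as an abnormality at all, which is the sense in which the tower of abnormalities stays simpler.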

2. p.20 - I don't consider that the Nixon example ``screws up''
circumscription, because it isn't my goal to get a unique minimal
model in all cases.  It is simply a mistake to say that for
circumscription ``we need a unique one (minimal model) in order to
guarantee that we can make coherent deductions from the theory''.

	If we have the rules that Quakers are normally
pacifists and Republicans are normally non-pacifists in a general
common sense database and we minimize abnormality, then we will get
the right result for individuals only known to be exactly one or
the other, and the right result (no conclusion either way) for
individuals not known to be either or known to be both.  It is
important that minimizing
abnormality on a large collection of facts answer those questions
it can and pass on those it can't.  Circumscription of abnormality
does just what it should in the Nixon case.  However, there are
other cases where the present version of circumscription combined
with the present abnormality theories doesn't do what we want.
Whether we have to modify circumscription or the theories or both
isn't presently clear.
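The Nixon case can be worked through as a brute-force minimal-model computation.  This is an illustrative toy, not circumscription proper; the proposition and abnormality-flag names (pac, ab1, ab2) are invented for the sketch:

```python
# Brute-force minimal-model sketch of the Nixon example.
# Quaker(Nixon) and Republican(Nixon) are both given, so the two
# default rules reduce to:  ~ab1 -> pac  and  ~ab2 -> ~pac.
from itertools import product

def models():
    """Yield every assignment (pac, ab1, ab2) satisfying both defaults."""
    for pac, ab1, ab2 in product([False, True], repeat=3):
        if (ab1 or pac) and (ab2 or not pac):
            yield (pac, ab1, ab2)

def minimal(ms):
    """Keep the models whose abnormality set is not a proper superset of
       another model's (minimize ab1, ab2; pac is allowed to vary)."""
    ms = list(ms)
    def ab_set(m):
        return {i for i, v in enumerate(m[1:]) if v}
    return [m for m in ms if not any(ab_set(n) < ab_set(m) for n in ms)]

mins = minimal(models())
# Two incomparable minimal models survive: a pacifist one (with ab2)
# and a non-pacifist one (with ab1).  The minimization simply passes
# on the pacifism question; nothing incoherent follows.
```

The point of the toy is that the absence of a unique minimal model is not a breakdown: every question settled the same way in all minimal models is still answered.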

3. p. 22 - I have no problem with their 1, 2, 3 and 5, but 4 (``beginning
to believe a particular fact may cause us to stop believing a
contrary fact'') troubles me.  The others refer to physical events,
but this seems to refer to a mental event.  Can they really mean
``believing that a certain fact becomes true at a certain time implies
believing that a contrary fact stops being true at that time''?  This
would separate the time of believing from the time of the event.

mcdermott@yale,hanks@yale,val/cc,grosof@sushi/cc
temporal reasoning and default logic
	I've been reading your report with Hanks, and I think it presents
important problems for AI.  I haven't understood all of it, and probably
I'll have a reaction to its main contentions later.  As you know,
Vladimir Lifschitz (and also Ben Grosof) has a treatment of your problem with his
new pointwise circumscription which was devised in connection with a
rather similar example from the blocks world.  I don't know whether my
intuition will agree with his solution, and I haven't given up on ordinary
circumscription in connection with possible revised axiomatizations of the
phenomena.

	I am surprised at the generality of your pessimistic conclusions
about logic on the small amount of evidence you have.  I wouldn't exclude
the possibility that all of the three non-monotonic formalisms can handle
the problem using revised axioms, e.g. by reifying causes.  Even
if I couldn't see a way of doing it in logic, I would suppose that
others might.  The only way of reaching a definite negative conclusion
would be to find a general property of the reasoning required that
logic-based systems don't have.  Of course, this has already been
done once, when we discovered that non-monotonicity was required
and everything previously regarded as logic was monotonic.

	There is, however, one definite technical point I can make
now, a difference between circumscription and the other two
systems: circumscription doesn't require that a unique minimal
model exist.  Here are my notes in connection with the Nixon
example.

p. 20 - I don't consider that the Nixon example ``screws up''
circumscription, because it isn't my goal to get a unique minimal
model in all cases.  It is simply a mistake to say that for
circumscription ``we need a unique one (minimal model) in order to
guarantee that we can make coherent deductions from the theory''.

	If we have the rules that Quakers are normally
pacifists and Republicans are normally non-pacifists in a general
common sense database and we minimize abnormality, then we will get
the right result for individuals only known to be exactly one or
the other, and the right result (no conclusion either way) for
individuals not known to be either or known to be both.  It is
important that minimizing
abnormality on a large collection of facts answer those questions
it can and pass on those it can't.  Circumscription of abnormality
does just what it should in the Nixon case.  However, there are
other cases where the present version of circumscription combined
with the present abnormality theories doesn't do what we want.
Whether we have to modify circumscription or the theories or both
isn't presently clear.

	For example, if we have the additional facts that Republicans
are normally conservative and that Quakers normally affirm rather
than swear, we will conclude that Nixon has these properties from
the same circumscription that offers no answer as to whether Nixon
is a pacifist.
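Extending the toy minimal-model sketch with the conservativeness default (again with invented flag names, ab3 for Republican & ~ab3 -> conservative) shows one and the same minimization answering one question while passing on the other:

```python
# Extended toy: pac and cons are Nixon's pacifism and conservativeness;
# the three defaults reduce to  ~ab1 -> pac,  ~ab2 -> ~pac,  ~ab3 -> cons.
from itertools import product

def models():
    """Yield every (pac, cons, ab1, ab2, ab3) satisfying the defaults."""
    for pac, cons, ab1, ab2, ab3 in product([False, True], repeat=5):
        if (ab1 or pac) and (ab2 or not pac) and (ab3 or cons):
            yield (pac, cons, ab1, ab2, ab3)

def minimal(ms):
    """Keep models whose set of true abnormality flags is not a proper
       superset of another model's."""
    ms = list(ms)
    def ab_set(m):
        return {i for i, v in enumerate(m[2:]) if v}
    return [m for m in ms if not any(ab_set(n) < ab_set(m) for n in ms)]

mins = minimal(models())
# In every minimal model cons is True (ab3 is never forced), while pac
# still takes both values -- the circumscription concludes Nixon is
# conservative yet passes on whether he is a pacifist.
```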

	Of course, this doesn't answer the main problem you have
posed, and I hope either to become satisfied with Lifschitz's
treatment or find one I like better.